440 research outputs found
Simulations of Extreme-Mass-Ratio Inspirals Using Pseudospectral Methods
Extreme-mass-ratio inspirals (EMRIs), stellar-mass compact objects (SCOs)
inspiralling into a massive black hole, are one of the main sources of
gravitational waves expected for the Laser Interferometer Space Antenna (LISA).
To extract the EMRI signals from the expected LISA data stream, which will also
contain the instrumental noise as well as other signals, we need very accurate
theoretical templates of the gravitational waves that they produce. In order to
construct those templates we need to account for the gravitational
backreaction, that is, how the gravitational field of the SCO affects its own
trajectory. In general relativity, the backreaction can be described in terms
of a local self-force, and the foundations to compute it have been laid
recently. Due to its complexity, some parts of the calculation of the
self-force have to be performed numerically. Here, we report on an ongoing
effort towards the computation of the self-force based on time-domain
multi-grid pseudospectral methods.
Comment: 6 pages, 4 figures, JPCS latex style. Submitted to JPCS (special issue for the proceedings of the 7th International LISA Symposium).
Effect of static load models on Hopf bifurcation point and critical modes of power systems
This paper presents the effect of different static load models on the Hopf bifurcation point and critical eigenvalues of power systems. The three most commonly used static load models are investigated thoroughly under various operating conditions and with different power system controllers. Some interesting new observations have emerged in the damping ratio of the critical mode, especially when power system controllers are introduced in the system to control Hopf bifurcations. These observations would be useful in controller design for Hopf bifurcation or oscillation control.
Absence of charge backscattering in the nonequilibrium current of normal-superconductor structures
We study the nonequilibrium transport properties of a
normal-superconductor-normal structure, focussing on the effect of adding an
impurity in the superconducting region. Current conservation requires the
superfluid velocity to be nonzero, causing a distortion of the quasiparticle
dispersion relation within the superconductor. For weakly reflecting interfaces
we find a regime of intermediate voltages in which Andreev transmission is the
only permitted mechanism for quasiparticles to enter the superconductor.
Impurities in the superconductor can only cause Andreev reflection of these
quasiparticles and thus cannot degrade the current. At higher voltages, a state
of gapless superconductivity develops which is sensitive to the presence of
impurities.
Comment: LaTeX file, 11 pages, 2 figures available upon request ([email protected]). To be published in Journal of Physics: Condensed Matter.
Electrochemical synthesis of peroxomonophosphate using boron-doped diamond anodes
A new method for the synthesis of peroxomonophosphate, based on the use of boron-doped diamond electrodes, is described. The amount of oxidant electrogenerated depends on the characteristics of the supporting media (pH and solute concentration) and on the operating conditions (temperature and current density). Results show that the pH, between values of 1 and 5, does not influence either the electrosynthesis of peroxomonophosphate or the chemical stability of the oxidant generated. Conversely, low temperatures are required during the electrosynthesis process to minimize the thermal decomposition of peroxomonophosphate and to guarantee a significant oxidant concentration. In addition, a marked influence of both the current density and the initial substrate concentration is observed. This observation can be explained in terms of the contribution of hydroxyl radicals in the oxidation mechanisms that occur on diamond surfaces. In the assays carried out below the water oxidation potential, the generation of hydroxyl radicals did not take place. In these cases, peroxomonophosphate generation occurs through a direct electron transfer and, therefore, at these low current densities lower concentrations are obtained. On the other hand, at higher potentials both direct and hydroxyl radical-mediated mechanisms contribute to the oxidant generation and the process is more efficient. In the same way, the contribution of hydroxyl radicals may also help to explain the significant influence of the substrate concentration. Thus, the coexistence of both phosphate and hydroxyl radicals is required to ensure the generation of significant amounts of peroxomonophosphoric acid.
Optimizing Integrated Information with a Prior Guided Random Search Algorithm
Integrated information theory (IIT) is a theoretical framework that provides
a quantitative measure to estimate when a physical system is conscious, its
degree of consciousness, and the complexity of the qualia space that the system
is experiencing. Formally, IIT rests on the assumption that if a surrogate
physical system can fully embed the phenomenological properties of
consciousness, then the system properties must be constrained by the properties
of the qualia being experienced. Following this assumption, IIT represents the
physical system as a network of interconnected elements that can be thought of
as a probabilistic causal graph, where each node has an input-output function and the whole graph is encoded in a transition probability matrix. Consequently, IIT's quantitative measure of consciousness, Φ, is computed with respect to the transition probability matrix and the present state of the graph. In this paper, we provide a random search algorithm that is able to optimize Φ in order to investigate, as the number of nodes increases, the structure of the graphs that have higher Φ. We also provide arguments that show the difficulties of applying more complex black-box search algorithms, such as Bayesian optimization or metaheuristics, to this particular problem. Additionally, we suggest specific research lines for these techniques to enhance the search algorithm so that it guarantees maximal Φ.
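The prior-guided random search described in this abstract can be illustrated with a minimal sketch. The `phi` function below is a deliberately simplified stand-in (graph density) rather than IIT's actual integrated-information measure, which requires partitioning the transition probability matrix; the names `mutate` and `prior_guided_random_search` are illustrative assumptions, not the paper's API.

```python
import random

def phi(adjacency):
    # Stand-in objective: real IIT Phi requires partitioning the
    # transition probability matrix; graph density is used here only
    # so the search loop itself can be demonstrated.
    n = len(adjacency)
    return sum(sum(row) for row in adjacency) / float(n * n)

def mutate(adjacency, prior, rng):
    # Flip one edge, biased by a prior probability that the edge
    # is useful (the "prior guidance" of the search).
    n = len(adjacency)
    new = [row[:] for row in adjacency]
    i, j = rng.randrange(n), rng.randrange(n)
    new[i][j] = 1 if rng.random() < prior[i][j] else 0
    return new

def prior_guided_random_search(n, prior, steps=200, seed=0):
    # Greedy random search: keep a candidate only if it improves phi.
    rng = random.Random(seed)
    best = [[0] * n for _ in range(n)]
    best_phi = phi(best)
    for _ in range(steps):
        candidate = mutate(best, prior, rng)
        cand_phi = phi(candidate)
        if cand_phi > best_phi:
            best, best_phi = candidate, cand_phi
    return best, best_phi
```

With an optimistic prior (all edge probabilities high), the search quickly densifies the graph, illustrating how the prior biases the proposal distribution without requiring gradients of the objective.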
An expert system for checking the correctness of memory systems using simulation and metamorphic testing
During the last few years, computer performance has reached a turning point where computing
power is no longer the only important concern. As a result, the emphasis is shifting from an
exclusive focus on the optimisation of the computing system to optimising other systems, like
the memory system. Broadly speaking, testing memory systems entails two main challenges: the
oracle problem and the reliable test set problem. The former consists in deciding if the outputs
of a test suite are correct. The latter refers to providing an appropriate test suite for determining
the correctness of the system under test.
In this paper we propose an expert system for checking the correctness of memory systems.
In order to face these challenges, our proposed system combines two orthogonal techniques
– simulation and metamorphic testing – enabling the automatic generation of appropriate test
cases and deciding if their outputs are correct. In contrast to conventional expert systems, our
system includes a factual database containing the results of previous simulations, and a simulation
platform for computing the behaviour of memory systems. The knowledge of the expert is
represented in the form of metamorphic relations, which are properties of the analysed system
involving multiple inputs and their outputs. Thus, the main contribution of this work is two-fold:
a method to automatise the testing process of memory systems, and a novel expert system design
focusing on increasing the overall performance of the testing process.
To show the applicability of our system, we have performed a thorough evaluation using 500 memory configurations and 4 different memory management algorithms, which entailed the execution of more than one million simulations. The evaluation used mutation testing, injecting faults into the memory management algorithms. The developed expert system was able to detect over 99% of the critical injected faults, hence obtaining very promising results and outperforming other standard techniques like random testing.
This work was supported by the Spanish Ministerio de Economía, Industria y Competitividad, Gobierno de España/FEDER (grant numbers DArDOS, TIN2015-65845-C3-1-R and FAME, RTI2018-093608-B-C31) and the Comunidad de Madrid project FORTE under Grant S2018/TCS-4314. The first author is also supported by the Universidad Complutense de Madrid - Santander Universidades grant (CT17/17-CT18/17).
Validating communication network configurations in cloud and HPC systems using Metamorphic Testing
Funding: This work was supported by the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with the Complutense University as part of the Program to Stimulate Research for Young Doctors in the context of the V PRICIT (Regional Programme of Research and Technological Innovation) under grant PR65/19-22452, the Spanish MINECO/FEDER project MASSIVE under Grant RTI2018-095255-B-I00, the Comunidad de Madrid project FORTE-CM under grant S2018/TCS-4314, and project S2018/TCS-4339 (BLOQUES-CM) co-funded by EIE Funds of the European Union and Comunidad de Madrid.
During the last few years, the fast evolution of computers and networks has led to the creation of a wide variety of services that have changed the way we live, like video streaming, on-line gaming and online shopping. These services are supported by complex systems, which require not only high computational power but also high-speed, low-latency networks to fulfil the expected quality requirements. However, a misleading configuration in one of the thousands of components that compose these systems may cause performance bottlenecks and functioning disruptions. Unfortunately, conventional testing methods are not adequate for checking these systems since, on many occasions, there does not exist a mechanism to determine if the behaviour of a system is the expected one. Fortunately, Metamorphic Testing is a valuable and promising testing technique that alleviates the two fundamental problems of testing: the oracle problem and the reliable test set problem. In this paper, we combine Metamorphic Testing and simulation techniques for validating communication network configurations in HPC systems. For this, we rely on a catalogue of Metamorphic Relations, based on network communications knowledge, for checking their correctness. In addition, we have conducted an experimental study for analysing the communication network of HPC systems.
The results show that Metamorphic Testing is appropriate for checking the correctness of communication networks supported by complex topologies in HPC systems.
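A metamorphic relation for network configurations can be sketched with a toy single-link latency model (an illustrative assumption, not one of the paper's catalogued relations): increasing a link's bandwidth must never make a transfer slower, so two simulation runs can check each other without a ground-truth oracle.

```python
def transfer_time(bytes_to_send, bandwidth, latency):
    # Toy model of a single link: time = latency + payload / bandwidth.
    return latency + bytes_to_send / bandwidth

def mr_more_bandwidth_not_slower(bytes_to_send, bandwidth, latency):
    # Metamorphic relation: doubling the link bandwidth must never
    # increase the predicted transfer time for the same payload.
    follow_up = transfer_time(bytes_to_send, 2 * bandwidth, latency)
    source = transfer_time(bytes_to_send, bandwidth, latency)
    return follow_up <= source
```

A misconfigured simulator (for instance, one that accidentally divides bandwidth into latency) would violate this relation on ordinary inputs, flagging the fault without needing to know the correct transfer time in advance.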
Voltage Stability Analysis of Grid-Connected Wind Farms with FACTS: Static and Dynamic Analysis
Recently, analysis of some major blackouts and failures of power systems has shown that the voltage instability problem is one of the main reasons for these disturbances and network collapses. In this paper, a systematic approach to voltage stability analysis using various techniques for the IEEE 14-bus case study is presented. Static analysis is used to analyze the voltage stability of the system under study, whilst dynamic analysis is used to evaluate the performance of compensators. The static techniques used are Power Flow, V–P curve analysis, and Q–V modal analysis. In this study, Flexible Alternating Current Transmission System (FACTS) devices, namely Static Synchronous Compensators (STATCOMs) and Static Var Compensators (SVCs), are used as reactive power compensators, taking into account maintaining the violated voltage magnitudes of the weak buses within the acceptable limits defined in ANSI C84.1. Simulation results validate that both the STATCOMs and the SVCs can be effectively used to enhance the static voltage stability and increase the network loadability margin. Additionally, based on the dynamic analysis results, it has been shown that STATCOMs have superior performance in dynamic voltage stability enhancement compared to SVCs.
Parallel mutation testing for large scale systems
Mutation testing is a valuable technique for measuring the quality of test suites in terms of detecting faults. However, one
of its main drawbacks is its high computational cost. For this purpose, several approaches have been recently proposed to
speed-up the mutation testing process by exploiting computational resources in distributed systems. However, bottlenecks
have been detected when those techniques are applied in large-scale systems. This work improves the performance of
mutation testing using large-scale systems by proposing a new load distribution algorithm, and parallelising different steps
of the process. To demonstrate the benefits of our approach, we report on a thorough empirical evaluation, which analyses
and compares our proposal with existing solutions executed in large-scale systems. The results show that our proposal
outperforms the state-of-the-art distribution algorithms by up to 35% in three different scenarios, reaching a reduction of the execution time of, at best, up to 99.66%.
This work was supported by the Spanish MINECO/FEDER project under Grants PID2021-122270OB-I00, TED2021-129381B-C21 and PID2019-108528RBC22, the Comunidad de Madrid project FORTE-CM under Grant S2018/TCS-4314, Project S2018/TCS-4339 (BLOQUES-CM) co-funded by EIE Funds of the European Union and Comunidad de Madrid, and the Project HPC-EUROPA3 (INFRAIA-2016-1-730897), with the support of the EC Research Innovation Action under the H2020 Programme.
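The load-distribution problem this abstract addresses can be illustrated with a generic longest-processing-time-first heuristic (a standard baseline sketch, not the paper's proposed algorithm): mutants with the largest estimated execution cost are assigned first, each to the currently least-loaded worker.

```python
import heapq

def distribute_mutants(costs, num_workers):
    # Longest-processing-time-first load balancing: sort mutants by
    # estimated execution cost (descending) and always assign the next
    # mutant to the worker with the smallest accumulated load.
    heap = [(0.0, w) for w in range(num_workers)]
    heapq.heapify(heap)
    assignment = {w: [] for w in range(num_workers)}
    for mutant, cost in sorted(enumerate(costs), key=lambda kv: -kv[1]):
        load, worker = heapq.heappop(heap)   # least-loaded worker
        assignment[worker].append(mutant)
        heapq.heappush(heap, (load + cost, worker))
    return assignment
```

Compared with naive round-robin assignment, this keeps the makespan close to the ideal even when a few mutants dominate the total execution cost, which is the bottleneck situation the abstract describes for large-scale systems.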